
    Probing pictorial relief: from experimental design to surface reconstruction

    The perception of pictorial surfaces has been studied quantitatively for more than 20 years. During this time, the “gauge figure method” has been shown to be a fast and intuitive method to quantify pictorial relief. In this method, observers adjust the attitude of a gauge figure such that it appears to lie flat on a surface in pictorial space. Although the method has received substantial attention in the literature and has become increasingly popular, a clear, step-by-step description has not been published yet. In this article, a detailed description of the method is provided: stimulus and sample preparation, performing the experiment, and reconstructing a 3-D surface from the experimental data. Furthermore, software (written in PsychToolbox) based on this description is provided in an online supplement. This report serves three purposes: First, it helps experimenters who want to use the gauge figure task but have been unable to design it because of the lack of information in the literature. Second, the detailed description can facilitate the design of software for various other platforms, possibly Web-based. Third, the method described in this article is extended to objects with holes and inner contours, a class of objects that has not yet been investigated with the gauge figure task.
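    The core bookkeeping of the gauge figure task is converting each adjusted attitude (slant, tilt) into a local depth gradient and then integrating those gradients into a relief. A minimal Python sketch of that step follows; this is not the PsychToolbox software supplied with the article, the function names are ours, and the naive row-wise integration stands in for the least-squares surface fit a real reconstruction would use:

```python
import math

def gauge_to_gradient(slant_deg, tilt_deg):
    """Convert one gauge-figure setting (slant, tilt in degrees) to the
    local depth gradient (p, q) = (dz/dx, dz/dy)."""
    s = math.radians(slant_deg)
    t = math.radians(tilt_deg)
    g = math.tan(s)
    return g * math.cos(t), g * math.sin(t)

def reconstruct_profile(settings, dx=1.0):
    """Naively integrate the p-component along one row of sample points
    to obtain a relative depth profile (anchored at z[0] = 0)."""
    z = [0.0]
    for slant, tilt in settings:
        p, _ = gauge_to_gradient(slant, tilt)
        z.append(z[-1] + p * dx)
    return z

# A row of settings on a fronto-parallel plane (slant 0) stays flat;
# a row on a 45-degree ramp facing rightward (tilt 0) rises linearly.
flat = reconstruct_profile([(0.0, 0.0)] * 4)
ramp = reconstruct_profile([(45.0, 0.0)] * 4)
```

    Integrating row by row only works for noise-free, consistent gradients; measured gauge settings are generally non-integrable, which is why real reconstructions solve for the surface that best fits all gradients at once.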

    Separable time-causal and time-recursive spatio-temporal receptive fields

    We present an improved model and theory for time-causal and time-recursive spatio-temporal receptive fields, obtained by a combination of Gaussian receptive fields over the spatial domain and first-order integrators, or equivalently truncated exponential filters, coupled in cascade over the temporal domain. Compared to previous spatio-temporal scale-space formulations in terms of non-enhancement of local extrema or scale invariance, these receptive fields are based on different scale-space axiomatics over time, ensuring non-creation of new local extrema or zero-crossings with increasing temporal scale. Specifically, extensions are presented concerning the parameterization of the intermediate temporal scale levels, the analysis of the resulting temporal dynamics, and the transfer of the theory to a discrete implementation in terms of recursive filters over time.
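    The temporal part of such a model is cheap to implement: each first-order integrator is one recursive update per sample, and coupling several in cascade yields a time-causal temporal smoothing kernel. A minimal Python sketch under that reading follows; the function names and the particular time constants are ours, not the paper's:

```python
def first_order_integrator(signal, mu):
    """One truncated-exponential smoothing stage, run recursively over
    time: y[t] = y[t-1] + (x[t] - y[t-1]) / (1 + mu)."""
    y, prev = [], 0.0
    for x in signal:
        prev = prev + (x - prev) / (1.0 + mu)
        y.append(prev)
    return y

def cascade(signal, mus):
    """Couple several first-order integrators in cascade to build a
    time-causal, time-recursive temporal scale-space representation."""
    for mu in mus:
        signal = first_order_integrator(signal, mu)
    return signal

# A unit step smoothed by three cascaded stages: the response rises
# monotonically toward 1 and never overshoots, consistent with the
# non-creation of new local extrema over time.
step = [0.0] * 5 + [1.0] * 20
smoothed = cascade(step, [1.0, 2.0, 4.0])
```

    Each stage only needs its previous output, so the full cascade runs in constant memory per temporal scale level, which is what makes the representation time-recursive.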

    A Computational Model of Visual Anisotropy

    Visual anisotropy has been demonstrated in multiple tasks where performance differs between vertical, horizontal, and oblique orientations of the stimuli. We explain some principles of visual anisotropy by anisotropic smoothing, based on a variation of Koenderink's approach in [1]. We tested the theory by presenting Gaussian elongated luminance profiles and measuring the perceived orientations by means of an adjustment task. Our framework is based on smoothing the image with elliptical Gaussian kernels, and it correctly predicted an illusory orientation bias towards the vertical axis. We discuss the scope of the theory in the context of other anisotropies in perception.
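    The key ingredient is a smoothing kernel with unequal horizontal and vertical extents. A minimal Python sketch of such a kernel follows; the sampling radius, normalization, and parameter values are our illustrative choices, not the paper's:

```python
import math

def elliptical_gaussian_kernel(sigma_x, sigma_y, radius):
    """Sampled elliptical Gaussian kernel with separate horizontal and
    vertical standard deviations, normalized to sum to 1."""
    k = []
    for y in range(-radius, radius + 1):
        row = []
        for x in range(-radius, radius + 1):
            row.append(math.exp(-0.5 * ((x / sigma_x) ** 2
                                        + (y / sigma_y) ** 2)))
        k.append(row)
    total = sum(sum(r) for r in k)
    return [[v / total for v in r] for r in k]

# A vertically elongated kernel (sigma_y > sigma_x) smooths more along
# the vertical axis -- the kind of anisotropy used to model the bias.
k = elliptical_gaussian_kernel(1.0, 2.0, 4)
```

    Convolving an elongated luminance profile with such a kernel stretches its apparent shape along the kernel's long axis, which is the mechanism behind the predicted vertical orientation bias.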

    Provably scale-covariant networks from oriented quasi quadrature measures in cascade

    This article presents a continuous model for hierarchical networks based on a combination of mathematically derived models of receptive fields and biologically inspired computations. Based on a functional model of complex cells in terms of an oriented quasi quadrature combination of first- and second-order directional Gaussian derivatives, we couple such primitive computations in cascade over combinatorial expansions over image orientations. Scale-space properties of the computational primitives are analysed, and it is shown that the resulting representation allows for provable scale and rotation covariance. A prototype application to texture analysis is developed, and it is demonstrated that a simplified mean-reduced representation of the resulting QuasiQuadNet leads to promising experimental results on three texture datasets.
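    A quasi quadrature measure combines first- and second-order derivative responses so that the result is roughly invariant to the local phase of the signal, as an energy model of complex cells requires. A 1-D Python sketch of this idea follows; it uses plain central differences rather than the paper's directional Gaussian derivatives, and the weighting constant is our illustrative choice:

```python
import math

def quasi_quadrature(signal, c=1.0):
    """Pointwise quasi quadrature measure Q = sqrt(L1^2 + c * L2^2),
    combining first- and second-order derivative responses.
    Derivatives are approximated with central differences here."""
    q = []
    for i in range(1, len(signal) - 1):
        l1 = (signal[i + 1] - signal[i - 1]) / 2.0
        l2 = signal[i + 1] - 2.0 * signal[i] + signal[i - 1]
        q.append(math.sqrt(l1 * l1 + c * l2 * l2))
    return q

# On a sinusoid of angular frequency w, choosing c = 1 / w^2 makes the
# measure nearly phase-invariant: peaks of L1 coincide with zeros of L2
# and vice versa, so Q stays almost constant along the wave.
w = 0.2
wave = [math.sin(w * i) for i in range(100)]
q = quasi_quadrature(wave, c=1.0 / w ** 2)
```

    In 2-D, the same combination is applied per orientation to directional derivatives, and responses across orientations are then pooled, which is where the scale and rotation covariance argument enters.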

    A demonstration of 'broken' visual space

    It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A > B > D yet also A < C < D) and hence no single one-to-one mapping between participants’ perceived space and any real 3D environment. Instead, factors that affect pairwise comparisons of distances dictate participants’ performance. These data contradict, more directly than previous experiments, the idea that the visual system builds and uses a coherent 3D internal representation of a scene.

    No transfer of calibration between action and perception in learning a golf putting task

    We assessed calibration of perception and action in the context of a golf putting task. Previous research has shown that right-handed novice golfers make rightward errors both in the perception of the perfect aiming line from the ball to the hole and in the putting action. Right-handed experts, however, produce accurate putting actions but tend to make leftward errors in perception. In two experiments, we examined whether these skill-related differences in directional error reflect transfer of calibration from action to perception. In the main experiment, three groups of right-handed novice participants followed a pretest, practice, posttest, retention-test design. During the tests, directional error for the putting action and the perception of the perfect aiming line were determined. During practice, participants were provided only with verbal outcome feedback about directional error; one group trained perception and the second trained action, whereas the third group did not practice. Practice led to a relatively permanent elimination of directional error, but these improvements in accuracy were specific to the trained task. Hence, no transfer of calibration occurred between perception and action. The findings are discussed within the two-visual-system model for perception and action, and implications for perceptual learning in action are raised.

    Pointing errors in non-metric virtual environments

    There have been suggestions that human navigation may depend on representations that have no metric, Euclidean interpretation, but that hypothesis remains contentious. An alternative is that observers build a consistent 3D representation of space. Using immersive virtual reality, we measured the ability of observers to point to targets in mazes that had zero, one or three ‘wormholes’: regions where the maze changed in configuration invisibly. In one model, we allowed the configuration of the maze to vary to best explain the pointing data; in a second model, we also allowed the local reference frame to be rotated through 90, 180 or 270 degrees. The latter model outperformed the former in the wormhole conditions, inconsistent with a Euclidean cognitive map.

    Haptic Edge Detection Through Shear

    Most tactile sensors are based on the assumption that touch depends on measuring pressure. However, the pressure distribution at the surface of a tactile sensor cannot be acquired directly and must be inferred from the deformation field induced by the touched object in the sensor medium. Currently, there is no consensus as to which components of strain are most informative for tactile sensing. Here, we propose that shape-related tactile information is more suitably recovered from shear strain than from normal strain. Based on a contact mechanics analysis, we demonstrate that the elastic behavior of a haptic probe provides a robust edge detection mechanism when shear strain is sensed. We used a jamming-based robot gripper as a tactile sensor to empirically validate that shear strain processing gives accurate edge information that is invariant to changes in pressure, as predicted by the contact mechanics study. This result has implications for the design of effective tactile sensors as well as for the understanding of early somatosensory processing in mammals.
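    The intuition behind shear-based edge detection can be shown in one dimension: under a uniform contact the displacement field is flat and shear vanishes, while at the contact edges the displacement changes abruptly and shear concentrates there, at the same location regardless of how hard the object presses. The following Python toy is our 1-D simplification, not the paper's contact mechanics analysis; the displacement profile and scaling are illustrative assumptions:

```python
def shear_strain(displacement, dx=1.0):
    """Approximate the shear component of strain at the sensor surface
    as half the spatial derivative of the normal displacement field,
    using central differences."""
    eps = []
    for i in range(1, len(displacement) - 1):
        eps.append(0.5 * (displacement[i + 1] - displacement[i - 1])
                   / (2.0 * dx))
    return eps

# A flat indenter pressing on the middle of the sensor: uniform
# displacement under the contact, zero outside.
u = [0.0] * 10 + [1.0] * 10 + [0.0] * 10
strain = shear_strain(u)
# Shear strain spikes at the two contact edges, vanishes under the
# uniform center, and its peak location does not move when the
# indentation depth is scaled, mimicking pressure invariance.
deeper = shear_strain([3.0 * x for x in u])
```

    A pressure-based reading of the same contact would respond over the whole contact area; reading shear instead localizes the response to the object's boundary, which is what makes it an edge detector.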

    Perceived Surface Slant Is Systematically Biased in the Actively-Generated Optic Flow

    Humans make systematic errors in the 3D interpretation of the optic flow in both passive and active vision. These systematic distortions can be predicted by a biologically inspired model which disregards self-motion information resulting from head movements (Caudek, Fantoni, & Domini, 2011). Here, we tested two predictions of this model: (1) a plane that is stationary in an earth-fixed reference frame will be perceived as changing its slant if the movement of the observer's head causes a variation of the optic flow; (2) a surface that rotates in an earth-fixed reference frame will be perceived as stationary if the surface rotation is appropriately yoked to the head movement so as to generate a variation of the surface slant but not of the optic flow. Both predictions were corroborated by two experiments in which observers judged the perceived slant of a random-dot planar surface during egomotion. We found qualitatively similar biases for monocular and binocular viewing of the simulated surfaces, although, in principle, the simultaneous presence of disparity and motion cues allows for a veridical recovery of surface slant.